Results 1 - 18 of 18
1.
PLoS One ; 18(8): e0289930, 2023.
Article in English | MEDLINE | ID: mdl-37647308

ABSTRACT

Machine learning (ML) is increasingly applied to predict adverse postoperative outcomes in cardiac surgery. Commonly used ML models fail to translate to clinical practice due to absent model explainability, limited uncertainty quantification, and no flexibility to handle missing data. We aimed to develop and benchmark a novel ML approach, the uncertainty-aware attention network (UAN), to overcome these common limitations. Two Bayesian uncertainty quantification methods were tested: generalized variational inference (GVI) and a posterior network (PN). The UAN models were compared with an ensemble of XGBoost models and a Bayesian logistic regression model (LR) with imputation. The derivation datasets consisted of 153,932 surgery events from the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) Cardiac Surgery Database. The external validation dataset consisted of 7343 surgery events extracted from the Medical Information Mart for Intensive Care (MIMIC) III critical care dataset. The highest performing model on the external validation dataset was a UAN-GVI with an area under the receiver operating characteristic curve (AUC) of 0.78 (0.01). Model performance improved on high-confidence samples, with an AUC of 0.81 (0.01). Confidence calibration for aleatoric uncertainty was excellent for all models. Calibration for epistemic uncertainty was more variable, with an ensemble of XGBoost models performing best with an AUC of 0.84 (0.08). Epistemic uncertainty was improved using the PN approach compared with GVI. The UAN uses an interpretable and flexible deep learning approach to provide estimates of model uncertainty alongside state-of-the-art predictions. The model has been made freely available as an easy-to-use web application, demonstrating that, by designing uncertainty-aware models with innately explainable predictions, deep learning may become more suitable for routine clinical use.
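
As a rough illustration of the ensemble-of-XGBoost comparator described above (not the authors' UAN code, and using synthetic data in place of the ANZSCTS/MIMIC registries), the sketch below derives an aleatoric estimate from the predicted risk and an epistemic estimate from disagreement across a bootstrap ensemble:

```python
# Minimal sketch: epistemic uncertainty from an ensemble of XGBoost classifiers.
import numpy as np
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9], random_state=0)

# Train an ensemble on bootstrap resamples; disagreement approximates epistemic uncertainty.
probs = []
rng = np.random.default_rng(0)
for seed in range(10):
    idx = rng.integers(0, len(X), len(X))
    model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                          eval_metric="logloss", random_state=seed)
    model.fit(X[idx], y[idx])
    probs.append(model.predict_proba(X)[:, 1])

probs = np.stack(probs)                    # shape: (n_models, n_samples)
mean_risk = probs.mean(axis=0)             # predicted probability of the adverse outcome
epistemic = probs.std(axis=0)              # spread across models = model (epistemic) uncertainty
aleatoric = mean_risk * (1 - mean_risk)    # Bernoulli variance = data (aleatoric) uncertainty
```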


Subjects
Cardiac Surgical Procedures; Lepidoptera; Animals; Bayes Theorem; Uncertainty; Australia; Machine Learning; Neural Networks, Computer
2.
Front Cardiovasc Med ; 10: 1211600, 2023.
Article in English | MEDLINE | ID: mdl-37492161

ABSTRACT

Objectives: Machine learning (ML) classification tools are known to accurately predict many cardiac surgical outcomes. A novel approach, ML-based survival analysis, remains unstudied for predicting mortality after cardiac surgery. We aimed to benchmark performance, as measured by the concordance index (C-index), of tree-based survival models against Cox proportional hazards (CPH) modeling and to explore risk factors using the best-performing model. Methods: 144,536 patients with 147,301 surgery events from the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) national database were used to train and validate models. Univariate analysis was performed using Student's t-test for continuous variables, the Chi-squared test for categorical variables, and stratified Kaplan-Meier estimation of the survival function. Three ML models were tested: a decision tree (DT), a random forest (RF), and a gradient boosting machine (GBM). Hyperparameter tuning was performed using a Bayesian search strategy. Performance was assessed using 2-fold cross-validation repeated 5 times. Results: The highest performing model was the GBM with a C-index of 0.803 (0.002), followed by RF with 0.791 (0.003), DT with 0.729 (0.014), and finally CPH with 0.596 (0.042). The 5 most predictive features were age, type of procedure, length of hospital stay, drain output in the first 4 h (mL), and inotrope use for more than 4 h postoperatively. Conclusion: Tree-based learning for survival analysis is a non-parametric and performant alternative to CPH modeling. GBMs offer interpretable modeling of non-linear relationships, promising to expose the most relevant risk factors and uncover new questions to guide future research.
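
A minimal sketch of the tree-based survival-analysis idea, fitted on simulated right-censored data rather than the ANZSCTS registry, and scored with the concordance index (scikit-survival assumed):

```python
# Gradient boosted survival model benchmarked by the C-index on synthetic data.
import numpy as np
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(42)
n = 3000
X = rng.normal(size=(n, 10))
risk = X[:, 0] + 0.5 * X[:, 1] ** 2                      # non-linear hazard signal
event_time = rng.exponential(scale=np.exp(-risk))
censor_time = rng.exponential(scale=1.5, size=n)
event = event_time <= censor_time                        # True = death observed
observed = np.minimum(event_time, censor_time)
y = Surv.from_arrays(event=event, time=observed)

train, test = np.arange(n) < 2400, np.arange(n) >= 2400
gbm = GradientBoostingSurvivalAnalysis(n_estimators=200, learning_rate=0.05, max_depth=3)
gbm.fit(X[train], y[train])

c_index = concordance_index_censored(event[test], observed[test], gbm.predict(X[test]))[0]
print(f"held-out C-index: {c_index:.3f}")
```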

3.
Psychiatry Res ; 327: 115265, 2023 09.
Article in English | MEDLINE | ID: mdl-37348404

ABSTRACT

Cluster analyses have been widely used in mental health research to decompose inter-individual heterogeneity by identifying more homogeneous subgroups of individuals. However, despite advances in new algorithms and increasing popularity, there is little guidance on model choice, analytical framework, and reporting requirements. In this paper, we aimed to address this gap by introducing the philosophy, design, advantages/disadvantages, and implementation of major algorithms that are particularly relevant in mental health research. Extensions of basic models, such as kernel methods, deep learning, semi-supervised clustering, and clustering ensembles, are subsequently introduced. How to choose algorithms to address common issues, as well as methods for pre-clustering data processing, clustering evaluation, and validation, are then discussed. Importantly, we also provide general guidance on clustering workflow and reporting requirements. To facilitate the implementation of different algorithms, we provide information on R functions and libraries.
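
The paper points readers to R functions and libraries; purely as an illustration of the workflow it describes (pre-processing, clustering, internal validation), here is an analogous sketch in Python with scikit-learn on synthetic data:

```python
# Standardize -> k-means -> internal validation, of the kind discussed in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.2, random_state=0)
X = StandardScaler().fit_transform(X)      # pre-clustering data processing

scores = {}
for k in range(2, 8):                      # choose k by an internal validity index
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(f"best k by silhouette: {best_k} ({scores[best_k]:.2f})")
```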


Subjects
Algorithms; Mental Health; Humans; Cluster Analysis
4.
Data Min Knowl Discov ; 37(2): 788-832, 2023.
Article in English | MEDLINE | ID: mdl-36504672

ABSTRACT

Recent trends in the Machine Learning (ML) and in particular Deep Learning (DL) domains have demonstrated that, with the availability of massive amounts of time series, ML and DL techniques are competitive in time series forecasting. Nevertheless, the different forms of non-stationarity associated with time series challenge the capabilities of data-driven ML models. Furthermore, because the forecasting domain has been fostered mainly by statisticians and econometricians over the years, the concepts related to forecast evaluation are not mainstream knowledge among ML researchers. We demonstrate in our work that, as a consequence, ML researchers often adopt flawed evaluation practices, which result in spurious conclusions that make methods appear competitive when they are not. Therefore, in this work we provide a tutorial-like compilation of the details associated with forecast evaluation. In this way, we intend to present the information on forecast evaluation in a form that fits the context of ML, as a means of bridging the knowledge gap between traditional forecasting methods and current state-of-the-art ML techniques. We elaborate on the different problematic characteristics of time series, such as non-normality and non-stationarity, and how they are associated with common pitfalls in forecast evaluation. Best practices in forecast evaluation are outlined with respect to the different steps, such as data partitioning, error calculation, statistical testing, and others. Further guidelines are also provided on selecting valid and suitable error measures depending on the specific characteristics of the dataset at hand.
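
A small sketch of the kind of practice the tutorial advocates, i.e. temporal rolling-origin evaluation with a scaled error measure (MASE) rather than random cross-validation; the series and naive baseline below are synthetic stand-ins:

```python
# Rolling-origin evaluation of a naive forecast with the MASE error measure.
import numpy as np

def mase(y_true, y_pred, y_train, m=1):
    """Mean absolute scaled error against the (seasonal) naive forecast."""
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_true - y_pred)) / scale

rng = np.random.default_rng(0)
y = 10 + 0.05 * np.arange(300) + rng.normal(0, 1, 300)   # trended, non-stationary series

horizon, errors = 12, []
for origin in range(200, 288, 12):                        # forecast origins move forward in time
    train, test = y[:origin], y[origin:origin + horizon]
    forecast = np.repeat(train[-1], horizon)              # naive baseline forecast
    errors.append(mase(test, forecast, train))

print(f"mean MASE over {len(errors)} origins: {np.mean(errors):.3f}")
```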

5.
J Card Surg ; 37(11): 3838-3845, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36001761

ABSTRACT

BACKGROUND: Machine learning (ML) models are promising tools for predicting adverse postoperative outcomes in cardiac surgery, yet have not translated to routine clinical use. We conducted a systematic review and meta-analysis to assess the predictive performance of ML approaches. METHODS: We conducted an electronic search to find studies assessing ML and traditional statistical models to predict postoperative outcomes. Our primary outcome was the concordance (C-) index of discriminative performance. Using a Bayesian meta-analytic approach, we pooled the C-indices with 95% credible intervals (CrI) across multiple outcomes, comparing ML methods to logistic regression (LR) and clinical scoring tools. Additionally, we performed critical difference and sensitivity analyses. RESULTS: We identified 2792 references from the search, of which 51 met the inclusion criteria. Two postoperative outcomes were amenable to meta-analysis: 30-day mortality and in-hospital mortality. For 30-day mortality, the pooled C-indices and 95% CrIs were 0.82 (0.79-0.85), 0.80 (0.77-0.84), and 0.78 (0.74-0.82) for ML models, LR, and scoring tools, respectively. For in-hospital mortality, the pooled C-index was 0.81 (0.78-0.84) and 0.79 (0.73-0.84) for ML models and LR, respectively. There were no statistically significant results indicating ML superiority over LR. CONCLUSION: In cardiac surgery patients, for the prediction of mortality, current ML methods do not have greater discriminative power than LR as measured by the C-index.
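
For illustration only, a Bayesian random-effects pooling of C-indices in PyMC, analogous in spirit to the meta-analytic approach described; the study-level estimates and standard errors below are invented:

```python
# Random-effects pooling of study-level C-indices with a 95% credible interval.
import arviz as az
import numpy as np
import pymc as pm

c_index = np.array([0.80, 0.83, 0.78, 0.85, 0.81])   # hypothetical study estimates
se = np.array([0.02, 0.03, 0.025, 0.02, 0.03])       # hypothetical standard errors

with pm.Model():
    mu = pm.Normal("mu", mu=0.8, sigma=0.1)           # pooled C-index
    tau = pm.HalfNormal("tau", sigma=0.05)            # between-study heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(c_index))
    pm.Normal("obs", mu=theta, sigma=se, observed=c_index)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1, progressbar=False)

print(az.summary(idata, var_names=["mu"], hdi_prob=0.95))
```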


Subjects
Cardiac Surgical Procedures; Machine Learning; Bayes Theorem; Cardiac Surgical Procedures/adverse effects; Hospital Mortality; Humans; Logistic Models
6.
Article in English | MEDLINE | ID: mdl-35853064

ABSTRACT

We introduce a novel method to estimate the causal effects of an intervention over multiple treated units by combining the techniques of probabilistic forecasting with global forecasting methods using deep learning (DL) models. Considering the counterfactual and synthetic approach for policy evaluation, we recast the causal effect estimation problem as a counterfactual prediction of the outcome of the treated units in the absence of the treatment. Nevertheless, in contrast to estimating only the counterfactual time series outcome, our work differs from conventional methods by proposing to estimate the counterfactual time series probability distribution based on the past preintervention set of treated and untreated time series. We rely on time series properties and forecasting methods, with shared parameters, applied to stacked univariate time series for causal identification. This article presents DeepProbCP, a framework for producing accurate quantile probabilistic forecasts for the counterfactual outcome, based on training a global autoregressive recurrent neural network model with conditional quantile functions on a large set of related time series. The output of the proposed method is the counterfactual outcome as a spline-based representation of the counterfactual distribution. We demonstrate how this probabilistic methodology, added to the global DL technique to forecast the counterfactual trend and distribution outcomes, overcomes many challenges faced by the baseline approaches to the policy evaluation problem. Often, some target interventions affect only the tails or the variance of the treated units' distribution rather than the mean or median, which is usual for skewed or heavy-tailed distributions. Under this scenario, classical causal effect models based on counterfactual predictions are not capable of accurately capturing, or even detecting, policy effects. By means of empirical evaluations on synthetic and real-world datasets, we show that our framework delivers more accurate forecasts than state-of-the-art models, depicting in which quantiles the intervention most affected the treated units, unlike conventional counterfactual inference methods based on nonprobabilistic approaches.
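
Not DeepProbCP itself, but a toy illustration of quantile (distributional) counterfactual forecasting: several conditional quantiles of the untreated outcome are predicted from pre-intervention lags using scikit-learn's quantile gradient boosting on a synthetic series:

```python
# Predict quantiles of the counterfactual next value from pre-intervention lags.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.arange(400)
y = np.sin(t / 20) + rng.gumbel(0, 0.3, size=t.size)       # skewed, heavy-ish tails

# Lag features built from the pre-intervention window (first 300 points).
lags = 10
X = np.column_stack([y[i:300 - lags + i] for i in range(lags)])
target = y[lags:300]

quantile_models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200).fit(X, target)
    for q in (0.1, 0.5, 0.9)
}

x_last = y[300 - lags:300].reshape(1, -1)                   # forecast the counterfactual next step
counterfactual = {q: m.predict(x_last)[0] for q, m in quantile_models.items()}
print(counterfactual)   # compare observed post-intervention values against these quantiles
```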

7.
Crit Care Med ; 50(3): e263-e271, 2022 03 01.
Article in English | MEDLINE | ID: mdl-34637423

ABSTRACT

OBJECTIVES: Current definitions of acute kidney injury use a urine output threshold of less than 0.5 mL/kg/hr, which has not been validated in the modern era. We aimed to determine the prognostic importance of urine output within the first 24 hours of admission to the ICU and to evaluate for variance between different admission diagnoses. DESIGN: Retrospective cohort study. SETTING: One hundred eighty-three ICUs throughout Australia and New Zealand from 2006 to 2016. PATIENTS: Patients 16 years or older who were admitted with curative intent and who did not regularly receive dialysis. ICU readmissions during the same hospital admission and patients transferred from an external ICU were excluded. MEASUREMENTS AND MAIN RESULTS: A total of 161,940 patients were included, with a mean urine output of 1.05 mL/kg/hr and an overall in-hospital mortality of 7.8%. A urine output less than 0.47 mL/kg/hr was associated with increased unadjusted in-hospital mortality, which varied with admission diagnosis. A machine learning model (extreme gradient boosting) was trained to predict in-hospital mortality and examine interactions between urine output and survival. Low urine output was most strongly associated with mortality in postoperative cardiovascular patients, nonoperative gastrointestinal admissions, nonoperative renal/genitourinary admissions, and patients with sepsis. CONCLUSIONS: Consistent with current definitions of acute kidney injury, a urine output threshold of less than 0.5 mL/kg/hr is modestly predictive of mortality in patients admitted to the ICU. The relative importance of urine output for predicting survival varies with admission diagnosis.
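
A hedged sketch of the modelling step on simulated data (not the ANZICS registry): fit a gradient boosting classifier and sweep a urine-output-like feature to inspect how predicted mortality changes at low output:

```python
# Gradient boosting mortality model plus a crude partial-dependence style sweep.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(1)
n = 20000
urine_output = rng.gamma(shape=2.0, scale=0.5, size=n)               # mL/kg/hr
severity = rng.normal(size=n)
logit = -2.5 + 1.2 * severity - 1.5 * np.minimum(urine_output, 0.5)  # risk rises at low output
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([urine_output, severity])
model = HistGradientBoostingClassifier(max_iter=300).fit(X, y)

# Sweep urine output while holding severity at its mean.
grid = np.linspace(0.1, 2.0, 8)
sweep = np.column_stack([grid, np.full_like(grid, severity.mean())])
for u, p in zip(grid, model.predict_proba(sweep)[:, 1]):
    print(f"urine output {u:.2f} mL/kg/hr -> predicted mortality {p:.3f}")
```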


Subjects
Acute Kidney Injury/mortality; Acute Kidney Injury/urine; Critical Illness/mortality; Intensive Care Units; Acute Kidney Injury/diagnosis; Adult; Aged; Australia; Female; Hospital Mortality; Humans; Machine Learning; Male; Middle Aged; New Zealand; Prognosis; Retrospective Studies; Young Adult
8.
Data Min Knowl Discov ; 35(3): 1032-1060, 2021.
Article in English | MEDLINE | ID: mdl-33727888

ABSTRACT

This paper studies time series extrinsic regression (TSER): a regression task whose aim is to learn the relationship between a time series and a continuous scalar variable; a task closely related to time series classification (TSC), which aims to learn the relationship between a time series and a categorical class label. This task generalizes time series forecasting, relaxing the requirement that the value predicted be a future value of the input series or depend primarily on its more recent values. In this paper, we motivate and study this task, and benchmark existing solutions and adaptations of TSC algorithms on a novel archive of 19 TSER datasets that we have assembled. Our results show that the state-of-the-art TSC algorithm Rocket, when adapted for regression, achieves the highest overall accuracy compared to adaptations of other TSC algorithms and state-of-the-art machine learning (ML) algorithms such as XGBoost, Random Forest, and Support Vector Regression. More importantly, we show that much research is needed in this field to improve the accuracy of ML models. We also find evidence that further research has excellent prospects of improving upon these straightforward baselines.
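
A toy version of the Rocket-for-regression idea (random convolution kernels feeding a linear ridge regressor), shown only to illustrate the technique; it is not the reference Rocket implementation and uses synthetic series:

```python
# Random convolutional kernel features + ridge regression for extrinsic regression.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n, length = 500, 150
X = rng.normal(size=(n, length)).cumsum(axis=1)           # synthetic time series
y = X[:, -1] - X[:, 0]                                    # scalar target per series

def random_kernel_features(series, n_kernels=200, kernel_len=9, rng=rng):
    kernels = rng.normal(size=(n_kernels, kernel_len))
    feats = []
    for k in kernels:
        conv = np.apply_along_axis(lambda s: np.convolve(s, k, mode="valid"), 1, series)
        feats.append(conv.max(axis=1))                    # max pooling
        feats.append((conv > 0).mean(axis=1))             # proportion of positive values
    return np.column_stack(feats)

F = random_kernel_features(X)                             # same kernels for all series
model = RidgeCV(alphas=np.logspace(-3, 3, 10)).fit(F[:400], y[:400])
print("R^2 on held-out series:", round(model.score(F[400:], y[400:]), 3))
```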

9.
IEEE Trans Neural Netw Learn Syst ; 32(4): 1586-1599, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32324575

ABSTRACT

Generating forecasts for time series with multiple seasonal cycles is an important use case for many industries nowadays. Accounting for the multiseasonal patterns becomes necessary to generate more accurate and meaningful forecasts in these contexts. In this article, we propose the long short-term memory multiseasonal net (LSTM-MSNet), a decomposition-based unified prediction framework to forecast time series with multiple seasonal patterns. The current state of the art in this space consists mostly of univariate methods, in which the model parameters of each time series are estimated independently. Consequently, these models are unable to include key patterns and structures that may be shared by a collection of time series. In contrast, LSTM-MSNet is a globally trained LSTM network, where a single prediction model is built across all the available time series to exploit the cross-series knowledge in a group of related time series. Furthermore, our methodology combines a series of state-of-the-art multiseasonal decomposition techniques to supplement the LSTM learning procedure. In our experiments, we show that on datasets from disparate data sources, e.g., the popular M4 forecasting competition, a decomposition step is beneficial, whereas in the common real-world situation of homogeneous series from a single application, exogenous seasonal variables or no seasonal preprocessing at all are better choices. All options are readily included in the framework, allowing us to achieve competitive results in both cases and outperforming many state-of-the-art multiseasonal forecasting methods.
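
A very rough sketch of the LSTM-MSNet recipe on synthetic monthly data: remove the seasonal component per series, then train one global LSTM across all series (Keras and statsmodels assumed; the real framework is considerably richer):

```python
# Deseasonalize each series, then fit a single global LSTM on pooled windows.
import numpy as np
from statsmodels.tsa.seasonal import STL
from tensorflow import keras

rng = np.random.default_rng(0)
series = [10 + 3 * np.sin(2 * np.pi * np.arange(120) / 12) + rng.normal(0, 0.5, 120)
          for _ in range(20)]                              # 20 related monthly series

window, X, y = 24, [], []
for s in series:
    deseason = s - STL(s, period=12).fit().seasonal        # seasonal decomposition step
    for i in range(len(deseason) - window):
        X.append(deseason[i:i + window])
        y.append(deseason[i + window])
X = np.array(X)[..., None]                                 # (samples, window, 1)
y = np.array(y)

model = keras.Sequential([keras.Input(shape=(window, 1)),
                          keras.layers.LSTM(32),
                          keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)        # one model for all series
```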

10.
Semin Thorac Cardiovasc Surg ; 33(3): 735-745, 2021.
Article in English | MEDLINE | ID: mdl-32979479

ABSTRACT

Using a large national database of cardiac surgical procedures, we applied machine learning (ML) to risk stratification and profiling for cardiac surgery-associated acute kidney injury. We compared the performance of ML to established scoring tools. Four ML algorithms were used: logistic regression (LR), gradient boosted machine (GBM), K-nearest neighbors, and neural networks (NN). These were compared to the Cleveland Clinic score and a risk score developed on the same database. Five-fold cross-validation repeated 20 times was used to measure the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Risk profiles from GBM and NN were generated using Shapley additive values. A total of 97,964 surgery events in 96,653 patients were included. For predicting postoperative renal replacement therapy using pre- and intraoperative data, LR, GBM, and NN achieved an AUC (standard deviation) of 0.84 (0.01), 0.85 (0.01), and 0.84 (0.01), respectively, outperforming the highest performing scoring tool at 0.81 (0.004). For predicting cardiac surgery-associated acute kidney injury, LR, GBM, and NN achieved 0.77 (0.01), 0.78 (0.01), and 0.77 (0.01), respectively, outperforming the scoring tool at 0.75 (0.004). Compared to scores and LR, Shapley additive values analysis of black-box model predictions was able to generate patient-level explanations for each prediction. ML algorithms provide state-of-the-art approaches to risk stratification. Explanatory modeling can exploit complex decision boundaries to aid the clinician in understanding the risks specific to individual patients.
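
A brief sketch of the explanation step on synthetic data: Shapley additive values for a tree-based risk model via the shap package, yielding per-patient feature contributions of the kind described:

```python
# Patient-level explanations with Shapley additive values for a GBM.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * X[:, 5]
y = rng.random(5000) < 1 / (1 + np.exp(-logit))

gbm = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X[:1])             # contribution of each feature
print(dict(enumerate(np.round(shap_values[0], 3))))    # per-feature push toward/away from risk
```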


Subjects
Acute Kidney Injury; Cardiac Surgical Procedures; Acute Kidney Injury/diagnosis; Acute Kidney Injury/etiology; Algorithms; Cardiac Surgical Procedures/adverse effects; Humans; Logistic Models; Machine Learning; Risk Factors
11.
J Asthma ; 57(4): 398-404, 2020 04.
Article in English | MEDLINE | ID: mdl-30701997

ABSTRACT

Objective: To compare the characteristics, use of invasive ventilation, and outcomes of patients admitted with critical asthma syndrome (CAS) to ICUs in Australia and New Zealand (ANZ) and to a large cohort of ICUs in the United States (US). Methods: We examined two large ICU databases for patients admitted with CAS in 2014 and 2015. We obtained, analyzed, and compared information on demographic and physiological characteristics, use of invasive mechanical ventilation, and clinical outcomes, and derived predictive models. Results: Overall, 2202 and 762 patients were admitted with a primary diagnosis of CAS in the ANZ and US databases, respectively (0.73% vs. 0.46% of all ICU admissions, P < 0.001). A similar percentage of patients received invasive mechanical ventilation in the first 24 h (24.7% vs. 24.4%, P = 0.87), but ANZ patients had lower respiratory rates and higher PaCO2 levels. Overall mortality was low (1.23% for ANZ and 1.71% for the US; P = 0.36), even among invasively ventilated patients (2.4% for ANZ vs. 1.1% for the US; P = 0.38). However, ANZ patients also had longer lengths of stay in the ICU (43 vs. 37 h, P = 0.001) and in hospital (105 vs. 78 h, P = 0.003). Conclusions: Patients admitted to ANZ and US ICUs with CAS are broadly similar and have low and similar rates of invasive ventilation and mortality. However, ANZ patients made up a greater proportion of ICU patients and had longer ICU and hospital stays. These findings provide a modern benchmark of invasive ventilation and mortality rates for future studies of CAS.


Subjects
Asthma/therapy; Cross-Cultural Comparison; Intensive Care Units/statistics & numerical data; Adult; Asthma/mortality; Australia/epidemiology; Cohort Studies; Databases, Factual/statistics & numerical data; Female; Hospital Mortality; Humans; Length of Stay/statistics & numerical data; Male; Middle Aged; New Zealand/epidemiology; Respiration, Artificial/statistics & numerical data; Treatment Outcome; United States/epidemiology
12.
J Clin Med ; 8(9)2019 Sep 05.
Article in English | MEDLINE | ID: mdl-31491944

ABSTRACT

Clinical audit of invasive mold disease (IMD) in hematology patients is inefficient due to the difficulties of case finding. As a result, antifungal stewardship (AFS) programs preferentially report drug cost and consumption rather than measures that actually reflect quality of care. We used machine learning-based natural language processing (NLP) to non-selectively screen chest computed tomography (CT) reports for pulmonary IMD, verified by clinical review against international definitions and benchmarked against key AFS measures. NLP screened 3014 reports from 1 September 2008 to 31 December 2017, generating 784 positives that, after review, identified 205 IMD episodes (44% probable/proven) in 185 patients from 50,303 admissions. Breakthrough probable/proven IMD on antifungal prophylaxis accounted for 60% of episodes, with serum monitoring of voriconazole or posaconazole in the 2 weeks prior performed in only 53% and 69% of episodes, respectively. Fiberoptic bronchoscopy within 2 days of the CT scan occurred in only 54% of episodes. The average turnaround of 12 days (range 7-22) for send-away bronchoalveolar galactomannan was associated with high empiric liposomal amphotericin consumption. A random audit of 10% of negative reports revealed two clinically significant misses (0.9%, 2/223). This is the first successful use of applied machine learning for institutional IMD surveillance across an entire hematology population, describing process and outcome measures relevant to AFS. Compared to current methods of clinical audit, semi-automated surveillance using NLP is more efficient and more inclusive, as it avoids restrictions based on the underlying hematologic condition, and has the added advantage of being potentially scalable.
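
An illustrative stand-in for the NLP screening step, using a TF-IDF bag-of-words classifier; the report snippets and labels below are invented, and the production system is more sophisticated than this:

```python
# Screen free-text radiology reports for features concerning for IMD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "nodule with surrounding ground glass halo in right upper lobe",
    "dense consolidation with air crescent sign",
    "clear lungs, no focal consolidation",
    "stable appearances, no new pulmonary nodules",
]
labels = [1, 1, 0, 0]   # 1 = report screens positive for IMD (toy labels)

screen = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
screen.fit(reports, labels)

new_report = ["new cavitating nodule with halo sign in left lower lobe"]
print(screen.predict_proba(new_report)[0, 1])   # probability the report screens positive
```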

13.
PLoS Med ; 15(11): e1002709, 2018 11.
Article in English | MEDLINE | ID: mdl-30500816

ABSTRACT

BACKGROUND: Resuscitated cardiac arrest is associated with high mortality; however, the ability to estimate risk of adverse outcomes using existing illness severity scores is limited. Using in-hospital data available within the first 24 hours of admission, we aimed to develop more accurate models of risk prediction using both logistic regression (LR) and machine learning (ML) techniques, with a combination of demographic, physiologic, and biochemical information. METHODS AND FINDINGS: Patient-level data were extracted from the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database for patients who had experienced a cardiac arrest within 24 hours prior to admission to an intensive care unit (ICU) during the period January 2006 to December 2016. The primary outcome was in-hospital mortality. The models were trained and tested on a dataset (split 90:10) including age, lowest and highest physiologic variables during the first 24 hours, and key past medical history. LR and 5 ML approaches (gradient boosting machine [GBM], support vector classifier [SVC], random forest [RF], artificial neural network [ANN], and an ensemble) were compared to the APACHE III and Australian and New Zealand Risk of Death (ANZROD) predictions. In all, 39,566 patients from 186 ICUs were analysed. Mean (±SD) age was 61 ± 17 years; 65% were male. Overall in-hospital mortality was 45.5%. Models were evaluated in the test set. The APACHE III and ANZROD scores demonstrated good discrimination (area under the receiver operating characteristic curve [AUROC] = 0.80 [95% CI 0.79-0.82] and 0.81 [95% CI 0.80-0.82], respectively) and modest calibration (Brier score 0.19 for both), which was slightly improved by LR (AUROC = 0.82 [95% CI 0.81-0.83], DeLong test, p < 0.001). Discrimination was significantly improved using ML models (ensemble and GBM AUROCs = 0.87 [95% CI 0.86-0.88], DeLong test, p < 0.001), with an improvement in performance (Brier score reduction of 22%). Explainability models were created to assist in identifying the physiologic features that most contributed to an individual patient's survival. Key limitations include the absence of pre-hospital data and absence of external validation. CONCLUSIONS: ML approaches significantly enhance predictive discrimination for mortality following cardiac arrest compared to existing illness severity scores and LR, without the use of pre-hospital data. The discriminative ability of these ML models requires validation in external cohorts to establish generalisability.
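
For orientation, a minimal sketch of the discrimination and calibration metrics used to compare models (AUROC and Brier score); the outcome labels and risk scores below are simulated, not ANZICS data:

```python
# Compare two risk scores on a held-out test split with AUROC and Brier score.
import numpy as np
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(0)
y_test = rng.random(4000) < 0.455                               # ~45.5% in-hospital mortality
score_a = np.clip(y_test * 0.30 + rng.beta(2, 3, 4000), 0, 1)   # stand-in for an ML model
score_b = np.clip(y_test * 0.15 + rng.beta(2, 3, 4000), 0, 1)   # stand-in for a severity score

for name, p in [("ML model", score_a), ("severity score", score_b)]:
    print(name, "AUROC:", round(roc_auc_score(y_test, p), 3),
          "Brier:", round(brier_score_loss(y_test, p), 3))
```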


Subjects
Cardiopulmonary Resuscitation/mortality; Decision Support Techniques; Heart Arrest/mortality; Hospital Mortality; Machine Learning; Aged; Australia; Cardiopulmonary Resuscitation/adverse effects; Clinical Decision-Making; Databases, Factual; Female; Health Status; Heart Arrest/diagnosis; Heart Arrest/therapy; Humans; Male; Middle Aged; New Zealand; Registries; Retrospective Studies; Risk Assessment; Risk Factors; Time Factors; Treatment Outcome
14.
PLoS One ; 12(12): e0188688, 2017.
Article in English | MEDLINE | ID: mdl-29281665

ABSTRACT

INTRODUCTION: Hospitals have seen a rise in Medical Emergency Team (MET) reviews. We hypothesised that the commonest MET calls result in similar treatments. Our aim was to design a pre-emptive management algorithm that allowed direct institution of treatment to patients without having to wait for attendance of the MET, and to model its potential impact on MET call incidence and patient outcomes. METHODS: Data were extracted for all MET calls from the hospital database. Association rule data mining techniques were used to identify the most common combinations of MET call causes, outcomes, and therapies. RESULTS: There were 13,656 MET calls during the 34-month study period in 7936 patients. The most common MET call was for hypotension [31% (2459/7936)]. These MET calls were strongly associated with the immediate administration of intravenous fluid (70% [1714/2459] vs. 13% [739/5477], p<0.001), unless the patient was located on a respiratory ward (adjusted OR 0.41 [95% CI 0.25-0.67], p<0.001), had a cardiac cause for admission (adjusted OR 0.61 [95% CI 0.50-0.75], p<0.001), or was under the care of the heart failure team (adjusted OR 0.29 [95% CI 0.19-0.42], p<0.001). Modelling the effect of a pre-emptive management algorithm for immediate fluid administration without MET activation, using data from a test period of 24 months following the study period, suggested it would lead to a 68.7% (2541/3697) reduction in MET calls for hypotension and a 19.6% (2541/12,938) reduction in total MET calls without adverse effects on patients. CONCLUSION: Routinely collected data and analytic techniques can be used to develop a pre-emptive management algorithm to administer intravenous fluid therapy to a specific group of hypotensive patients without the need to initiate a MET call. This could lead both to earlier treatment for the patient and to fewer total MET calls.
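
A toy sketch of association rule mining of the kind used to find common cause/therapy combinations in MET calls: frequent itemsets via apriori (mlxtend assumed) plus a directly computed rule confidence. The transactions below are invented:

```python
# Frequent MET-call itemsets and the confidence of one cause -> therapy rule.
import pandas as pd
from mlxtend.frequent_patterns import apriori
from mlxtend.preprocessing import TransactionEncoder

met_calls = [
    ["hypotension", "iv_fluid"],
    ["hypotension", "iv_fluid"],
    ["hypotension", "cardiac_admission"],
    ["tachycardia", "iv_fluid"],
    ["hypotension", "iv_fluid", "general_ward"],
    ["desaturation", "oxygen"],
]

te = TransactionEncoder()
df = pd.DataFrame(te.fit_transform(met_calls), columns=te.columns_)

frequent = apriori(df, min_support=0.3, use_colnames=True)
print(frequent)                               # frequent cause/therapy combinations

# Confidence of the rule {hypotension} -> {iv_fluid}, computed directly.
conf = (df["hypotension"] & df["iv_fluid"]).mean() / df["hypotension"].mean()
print(f"P(iv_fluid | hypotension) = {conf:.2f}")
```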


Subjects
Efficiency, Organizational; Emergency Service, Hospital/organization & administration; Hospital Rapid Response Team/organization & administration; Patient Safety; Algorithms; Data Interpretation, Statistical; Disease/classification; Humans
15.
PLoS One ; 12(5): e0176570, 2017.
Article in English | MEDLINE | ID: mdl-28464035

ABSTRACT

PURPOSE: Comparisons between institutions of intensive care unit (ICU) length of stay (LOS) are significantly confounded by individual patient characteristics, and there is currently a paucity of methods available to calculate risk-adjusted metrics. METHODS: We extracted de-identified data from the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database for admissions between January 1, 2011 and December 31, 2015. We used a mixed-effects log-normal regression model to predict LOS using patient and admission characteristics. We calculated a risk-adjusted LOS ratio (RALOSR) by dividing the geometric mean observed LOS by the exponentiated expected ln-LOS for each site and year. The RALOSR is scaled such that values <1 indicate a LOS shorter than expected, while values >1 indicate a LOS longer than expected. Secondary mixed-effects regression modelling was used to assess the stability of the estimate in units over time. RESULTS: During the study there were a total of 662,525 admissions to 168 units (median annual admissions = 767, IQR: 426-1121). The mean observed LOS was 3.21 days (median = 1.79, IQR = 0.92-3.52) over the entire period, and declined on average by 1.97 hours per year (95% CI: 1.76-2.18) from 2011 to 2015. The RALOSR varied considerably between units, ranging from 0.35 to 2.34, indicating large differences after accounting for case mix. CONCLUSIONS: There are large disparities in risk-adjusted LOS among Australian and New Zealand ICUs, which may reflect differences in resource utilization.
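
A hedged sketch of the RALOSR computation on simulated data: fit a mixed-effects model to log(LOS), then divide each site's geometric-mean observed LOS by the exponentiated expected log-LOS (column names here are illustrative only):

```python
# Mixed-effects log-normal LOS model and a per-site risk-adjusted LOS ratio.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n = 30, 6000
df = pd.DataFrame({
    "site": rng.integers(0, n_sites, n),
    "age": rng.normal(60, 15, n),
    "severity": rng.normal(0, 1, n),
})
site_effect = rng.normal(0, 0.2, n_sites)[df["site"]]
df["log_los"] = 0.5 + 0.01 * df["age"] + 0.4 * df["severity"] + site_effect + rng.normal(0, 0.6, n)

model = smf.mixedlm("log_los ~ age + severity", df, groups=df["site"]).fit()
df["expected_log_los"] = model.predict(df)                 # fixed-effects expectation

ralosr = (df.groupby("site")
            .apply(lambda g: np.exp(g["log_los"].mean()) / np.exp(g["expected_log_los"].mean())))
print(ralosr.sort_values().head())                         # <1 shorter, >1 longer than expected
```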


Subjects
Intensive Care Units/statistics & numerical data; Length of Stay/statistics & numerical data; Aged; Australia; Female; Humans; Male; Middle Aged; Models, Statistical; New Zealand; Risk Assessment
16.
JCO Clin Cancer Inform ; 1: 1-10, 2017 11.
Article in English | MEDLINE | ID: mdl-30657390

ABSTRACT

PURPOSE: Prospective epidemiologic surveillance of invasive mold disease (IMD) in hematology patients is hampered by the absence of a reliable laboratory prompt. This study develops an expert system for electronic surveillance of IMD that combines probabilities derived from natural language processing (NLP) of computed tomography (CT) reports with microbiology and antifungal drug data to improve prediction of IMD. METHODS: Microbiology indicators and antifungal drug-dispensing data were extracted from hospital information systems at three tertiary hospitals for 123 hematology-oncology patients. Of this group, 64 case patients had 26 probable/proven IMD according to international definitions, and 59 patients were uninfected controls. Derived probabilities from NLP combined with medical expertise identified patients at high likelihood of IMD, with the remaining patients processed by a machine-learning classifier trained on all available features. RESULTS: Compared with the baseline text classifier, the expert system that incorporated the best performing algorithm (naïve Bayes) improved specificity from 50.8% (95% CI, 37.5% to 64.1%) to 74.6% (95% CI, 61.6% to 85.0%), reducing false positives by 48% from 29 to 15; improved sensitivity slightly from 96.9% (95% CI, 89.2% to 99.6%) to 98.4% (95% CI, 91.6% to 100%); and improved the area under the receiver operating characteristic curve from 73.9% (95% CI, 67.1% to 80.6%) to 92.8% (95% CI, 88% to 97.5%). CONCLUSION: An expert system that uses multiple sources of data (CT reports, microbiology, antifungal drug dispensing) is a promising approach to continuous prospective surveillance of IMD in the hospital and demonstrates reduced false notifications (positives) compared with NLP of CT reports alone. Our expert system could provide decision support for IMD surveillance, which is critical to antifungal stewardship and improving supportive care in cancer.
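
A rough sketch of the late-fusion idea: combine an NLP-derived CT-report probability with microbiology and antifungal-dispensing indicators in a naïve Bayes classifier. The features and data below are invented:

```python
# Fuse CT-report NLP probability with microbiology and drug-dispensing flags.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 600
ct_nlp_prob = rng.random(n)                           # output of the CT-report text model
positive_micro = rng.random(n) < 0.2                  # e.g. a positive fungal biomarker/culture flag
mould_active_drug = rng.random(n) < 0.3               # antifungal dispensing signal
y = rng.random(n) < (0.1 + 0.5 * ct_nlp_prob * (positive_micro | mould_active_drug))

X = np.column_stack([ct_nlp_prob, positive_micro, mould_active_drug])
clf = GaussianNB().fit(X[:450], y[:450])
print("held-out AUC:", round(roc_auc_score(y[450:], clf.predict_proba(X[450:])[:, 1]), 3))
```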


Subjects
Invasive Fungal Infections/diagnosis; Invasive Fungal Infections/therapy; Medical Oncology; Monitoring, Physiologic/methods; Neoplasms/diagnosis; Neoplasms/therapy; Telemedicine/methods; Adult; Aged; Aged, 80 and over; Algorithms; Antifungal Agents/therapeutic use; Case-Control Studies; Combined Modality Therapy; Electronic Health Records; Intelligent Systems; Female; Humans; Invasive Fungal Infections/etiology; Machine Learning; Male; Medical Oncology/methods; Microbiological Techniques; Middle Aged; Natural Language Processing; Neoplasms/complications; ROC Curve; Sensitivity and Specificity; Tomography, X-Ray Computed; Young Adult
17.
Comput Methods Programs Biomed ; 107(3): 497-512, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22306072

ABSTRACT

In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though nuclei of isolated cells in high-quality acquisitions are often easy to segment, the problem lies in the segmentation of large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of complete microscope slides. We implemented a system that enables processing of full-resolution images and propose a new algorithm for segmenting the nuclei under adequate control by the expert user. The system can work automatically or be interactively guided, to allow for segmentation across the whole range of slide and image characteristics. It facilitates data storage and interaction between technical and medical experts, especially through its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge before it determines the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detection algorithm. Motivated by the observation that cell nuclei are surrounded by cytoplasm and their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm was tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
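
A sketch of one slice of the pipeline (edge detection followed by a Hough transform for ellipses) using scikit-image on a synthetic nucleus-like shape; the denoising, voting, and level-set refinement steps are not shown:

```python
# Canny edge detection then an ellipse Hough transform to locate a candidate nucleus.
import numpy as np
from skimage.draw import ellipse_perimeter
from skimage.feature import canny
from skimage.transform import hough_ellipse

img = np.zeros((80, 80), dtype=float)
rr, cc = ellipse_perimeter(40, 40, 12, 20)        # a roughly elliptical nucleus boundary
img[rr, cc] = 1.0

edges = canny(img, sigma=1.0)
candidates = hough_ellipse(edges, accuracy=10, threshold=4, min_size=8, max_size=25)
if len(candidates):
    candidates.sort(order="accumulator")          # best-supported ellipse last
    best = candidates[-1]
    print("centre:", (best["yc"], best["xc"]), "semi-axes:", (best["a"], best["b"]))
```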


Subjects
Cell Nucleus/metabolism; Signal Processing, Computer-Assisted; Uterine Cervical Neoplasms/diagnosis; Uterine Cervical Neoplasms/physiopathology; Algorithms; Automation; Cytoplasm/metabolism; Early Detection of Cancer/methods; Female; Humans; Image Processing, Computer-Assisted/methods; Internet; Models, Statistical; Pattern Recognition, Automated/methods; Software
18.
IEEE Trans Neural Netw Learn Syst ; 23(11): 1841-7, 2012 Nov.
Article in English | MEDLINE | ID: mdl-24808077

ABSTRACT

In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR combines initial parameter estimation by a grid search with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive with models commonly used in the field.
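
A small generic sketch of a memetic algorithm (evolutionary search with local refinement), the optimization strategy proposed for NCSTAR fitting; the objective below is a stand-in, not the NCSTAR likelihood:

```python
# Memetic algorithm: evolutionary selection/mutation plus local Nelder-Mead refinement.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def objective(theta):
    """Toy multimodal surrogate for a model-fitting loss."""
    return np.sum(theta**2) + 2 * np.sum(np.sin(3 * theta) ** 2)

dim, pop_size, generations = 4, 20, 15
pop = rng.uniform(-3, 3, size=(pop_size, dim))

for _ in range(generations):
    fitness = np.array([objective(p) for p in pop])
    parents = pop[np.argsort(fitness)[: pop_size // 2]]          # selection
    children = parents + rng.normal(0, 0.3, parents.shape)       # mutation
    # Memetic step: refine each child with a local search before reinsertion.
    children = np.array([minimize(objective, c, method="Nelder-Mead").x for c in children])
    pop = np.vstack([parents, children])

best = pop[np.argmin([objective(p) for p in pop])]
print("best parameters:", np.round(best, 3), "loss:", round(objective(best), 4))
```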
